Parameter Convergence for EM and MM Algorithms
Abstract
It is well known that the likelihood sequence of the EM algorithm is nondecreasing and convergent (Dempster, Laird and Rubin (1977)), and that the limit points of the EM algorithm are stationary points of the likelihood (Wu (1983)), but the issue of the convergence of the EM sequence itself has not been completely settled. In this paper we close this gap and show that, under general, simple, verifiable conditions, any EM sequence is convergent. In pathological cases we show that the sequence cycles in the limit among a finite number of stationary points with equal likelihood. The results apply equally to the optimization transfer class of algorithms (MM algorithm) of Lange, Hunter and Yang (2000). Two different EM algorithms constructed on the same dataset illustrate the convergence and the cyclic behavior.
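For context, the following minimal sketch (not taken from the paper; the two-component Gaussian mixture, the initialization, and the tolerance are illustrative assumptions) shows a standard EM iteration and tracks both the log-likelihood sequence, which is nondecreasing as in Dempster, Laird and Rubin (1977), and the parameter sequence whose convergence is the subject of the paper.

```python
import numpy as np
from scipy.stats import norm

def em_gaussian_mixture(x, n_iter=200, tol=1e-10):
    """EM for a two-component univariate Gaussian mixture (illustrative sketch)."""
    # Crude initialisation (a hypothetical choice, not from the paper).
    pi, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    loglik_old = -np.inf
    for _ in range(n_iter):
        d1 = pi * norm.pdf(x, mu1, s1)
        d2 = (1 - pi) * norm.pdf(x, mu2, s2)
        loglik = np.sum(np.log(d1 + d2))      # observed-data log-likelihood
        if loglik - loglik_old < tol:         # likelihood sequence has levelled off
            break
        loglik_old = loglik
        # E-step: posterior responsibility of component 1 for each point.
        r = d1 / (d1 + d2)
        # M-step: maximise the expected complete-data log-likelihood.
        pi = r.mean()
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / np.sum(r))
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r))
    return pi, mu1, mu2, s1, s2, loglik

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
print(em_gaussian_mixture(x))
```

Monitoring the parameter iterates themselves, rather than only the likelihood values, is what distinguishes the question addressed by the paper from the classical monotonicity result.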
Similar articles
EM vs MM: A case study
The celebrated expectation-maximization (EM) algorithm is one of the most widely used optimization methods in statistics. In recent years it has been recognized that the EM algorithm is a special case of the more general minorization-maximization (MM) principle. Both algorithms create a surrogate function in the first (E or minorization) step, which is then maximized in the second (M) step. This two-step process always...
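As an illustration of the minorize-then-maximize mechanism described above, here is a hedged sketch (not from the cited article) that fits a logistic regression by MM using the classical Böhning-Lindsay quadratic minorization: because p(1-p) <= 1/4, the surrogate built in the first step is a quadratic with fixed curvature X'X/4, and the second step maximizes it in closed form.

```python
import numpy as np

def mm_logistic(X, y, n_iter=500, tol=1e-10):
    """MM fit of logistic regression via the Bohning-Lindsay quadratic minorizer."""
    n, d = X.shape
    beta = np.zeros(d)
    H_inv = np.linalg.inv(X.T @ X)              # fixed surrogate curvature (X'X/4)^{-1} up to the factor 4
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # current fitted probabilities
        # Maximizer of the quadratic surrogate: beta + 4 (X'X)^{-1} X'(y - p).
        step = 4.0 * H_inv @ (X.T @ (y - p))
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Toy data, purely illustrative.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_beta = np.array([0.5, -1.0, 2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
print(mm_logistic(X, y))
```

Each update increases the log-likelihood because the surrogate lies below it and touches it at the current iterate, which is exactly the ascent property shared by EM and MM.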
Convergence of a semi-analytical method on the fuzzy linear systems
In this paper, we apply the homotopy analysis method (HAM) for solving fuzzy linear systems and present necessary and sufficient conditions for the convergence of the series solution obtained via the HAM. Also, we present a new criterion for choosing a proper value of the convergence-control parameter $\hbar$ when the HAM is applied to a linear system of equations. Comparisons are made between the ...
Asymptotic Convergence Properties of EM Type Algorithms
We analyze the asymptotic convergence properties of a general class of EM-type algorithms for estimating an unknown parameter via alternating estimation and maximization. As examples, this class includes ML-EM, penalized ML-EM, Green's OSL-EM, and many other approximate EM algorithms. A theorem is given which provides conditions for monotone convergence with respect to a given norm and specifies an ...
Convergence in Norm for Alternating Expectation-Maximization (EM) Type Algorithms (Statistica Sinica 5 (1995), 41-54)
We provide a sufficient condition for convergence of a general class of alternating estimation-maximization (EM) type continuous-parameter estimation algorithms with respect to a given norm. This class includes EM, penalized EM, Green's OSL-EM, and other approximate EM algorithms. The convergence analysis can be extended to include alternating coordinate-maximization EM algorithms such as Meng an...
PX × AI: algorithmics for better convergence in restricted maximum likelihood estimation
Maximising the (log) likelihood (logL) in restricted maximum likelihood (REML) estimation of variance components almost invariably represents a constrained optimisation problem. Iterative algorithms available to solve this problem differ substantially in the computational resources needed, ease of implementation, sensitivity to the choice of starting values, and rates of convergence. One of...